Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality. Project page: https://wnhsu.github.io/ReVISE.
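As a rough illustration of the two-stage resynthesis idea, the sketch below predicts discrete units from corrupted audio-visual input and then resynthesizes a waveform from those units. It is a minimal, hypothetical rendering, not the released ReVISE implementation: the module names (`PseudoAVSR`, `UnitVocoder`), feature dimensions, and unit vocabulary size are placeholder assumptions, and the real system initializes P-AVSR from a self-supervised audio-visual model and uses a unit-based vocoder for P-TTS.

```python
# Minimal sketch of a ReVISE-style two-stage pipeline (hypothetical module
# names and sizes; not the released implementation).
import torch
import torch.nn as nn

NUM_UNITS = 1000          # size of the discrete unit vocabulary (assumption)


class PseudoAVSR(nn.Module):
    """Maps (noisy audio, lip video) features to a sequence of discrete units."""
    def __init__(self, dim=768):
        super().__init__()
        self.audio_proj = nn.Linear(80, dim)    # e.g. log-mel frames
        self.video_proj = nn.Linear(512, dim)   # e.g. lip-ROI embeddings
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.unit_head = nn.Linear(dim, NUM_UNITS)

    def forward(self, audio_feats, video_feats):
        x = self.audio_proj(audio_feats) + self.video_proj(video_feats)
        return self.unit_head(self.encoder(x))   # (B, T, NUM_UNITS) logits


class UnitVocoder(nn.Module):
    """Stand-in for a unit-to-waveform synthesizer (the P-TTS stage)."""
    def __init__(self, dim=256, upsample=320):
        super().__init__()
        self.embed = nn.Embedding(NUM_UNITS, dim)
        self.net = nn.Sequential(nn.Linear(dim, upsample))

    def forward(self, units):                     # units: (B, T) int64
        return self.net(self.embed(units)).flatten(1)  # crude (B, T*upsample) waveform


# Inference: predict units from corrupted audio-visual input, then resynthesize.
p_avsr, vocoder = PseudoAVSR(), UnitVocoder()
audio = torch.randn(1, 100, 80)
video = torch.randn(1, 100, 512)
units = p_avsr(audio, video).argmax(dim=-1)       # discrete units, not waveforms
clean_speech = vocoder(units)
```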
Current self-supervised learning algorithms are often modality-specific and require large amounts of computational resources. To address these issues, we increase the training efficiency of data2vec, a learning objective that generalizes across several modalities. We do not encode masked tokens, use a fast convolutional decoder, and amortize the effort to build teacher representations. data2vec 2.0 benefits from the rich contextualized target representations introduced in data2vec, which enable a fast self-supervised learner. Experiments on ImageNet-1K image classification show that data2vec 2.0 matches the accuracy of Masked Autoencoders with 16.4x lower pre-training time, on Librispeech speech recognition it performs as well as wav2vec 2.0 in 10.6x less time, and on GLUE natural language understanding it matches a retrained RoBERTa model in half the time. Trading some speed for accuracy results in ImageNet-1K top-1 accuracy of 86.8% with a ViT-L model trained for 150 epochs.
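The sketch below illustrates the two efficiency ideas described above under simplifying assumptions (a single shared toy encoder instead of an EMA teacher with multi-layer targets, and random features instead of image patches or audio frames): the teacher targets are computed once per sample and reused across M different masks, and the student encoder only ever sees the visible tokens, with a light convolutional decoder producing predictions at the masked positions.

```python
# Sketch of the data2vec 2.0 efficiency ideas (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D, M, MASK_RATIO = 196, 768, 8, 0.8         # tokens, dim, masks per sample, ratio

encoder = nn.TransformerEncoder(                # toy stand-in for the student/teacher
    nn.TransformerEncoderLayer(d_model=D, nhead=12, batch_first=True), num_layers=2)
decoder = nn.Conv1d(D, D, kernel_size=3, padding=1)   # lightweight conv decoder
mask_token = nn.Parameter(torch.zeros(D))

tokens = torch.randn(1, T, D)                   # patch/frame embeddings for one sample

with torch.no_grad():                           # teacher targets built ONCE, then reused
    targets = encoder(tokens)                   # (1, T, D) contextualized targets

loss = 0.0
for _ in range(M):                              # amortize the teacher pass over M masks
    keep = torch.rand(T).argsort()[: int(T * (1 - MASK_RATIO))]
    visible = tokens[:, keep]                   # the student never encodes masked tokens
    encoded = encoder(visible)

    # Scatter encoded visible tokens back; fill masked slots with a mask embedding.
    full = mask_token.expand(1, T, D).clone()
    full[:, keep] = encoded
    pred = decoder(full.transpose(1, 2)).transpose(1, 2)

    masked = torch.ones(T, dtype=torch.bool)
    masked[keep] = False
    loss = loss + F.mse_loss(pred[:, masked], targets[:, masked])

(loss / M).backward()
```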
Automatic speech recognition research focuses on training and evaluating on static datasets. Yet, as speech models are increasingly deployed on personal devices, such models encounter user-specific distributional shifts. To simulate this real-world scenario, we introduce LibriContinual, a continual learning benchmark for speaker-specific domain adaptation derived from LibriVox audiobooks, with data corresponding to 118 individual speakers and 6 train splits per speaker of different sizes. Additionally, current speech recognition models and continual learning algorithms are not optimized to be compute-efficient. We adapt NetAug, a general-purpose training algorithm, for ASR and create a novel Conformer variant called the DisConformer (Disentangled Conformer). This algorithm produces ASR models consisting of a frozen 'core' network for general-purpose use and several tunable 'augment' networks for speaker-specific tuning. Using such models, we propose a novel compute-efficient continual learning algorithm called DisentangledCL. Our experiments show that the DisConformer models significantly outperform baselines on general ASR, i.e., LibriSpeech (by 15.58% rel. WER on test-other). On speaker-specific LibriContinual they significantly outperform trainable-parameter-matched baselines (by 20.65% rel. WER on test) and even match fully finetuned baselines in some settings.
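The core/augment split might look like the following sketch, where a frozen 'core' feed-forward branch is shared for general-purpose ASR and a small trainable 'augment' branch is added for speaker-specific tuning. The layer structure, sizes, and additive combination are illustrative assumptions, not the paper's exact DisConformer recipe.

```python
# Illustrative frozen-core / tunable-augment layer in the spirit of the
# DisConformer description (combination rule and sizes are assumptions).
import torch
import torch.nn as nn


class CoreAugmentFFN(nn.Module):
    def __init__(self, dim=256, core_hidden=1024, aug_hidden=64):
        super().__init__()
        self.core = nn.Sequential(nn.Linear(dim, core_hidden), nn.ReLU(),
                                  nn.Linear(core_hidden, dim))
        self.augment = nn.Sequential(nn.Linear(dim, aug_hidden), nn.ReLU(),
                                     nn.Linear(aug_hidden, dim))
        for p in self.core.parameters():       # the shared core stays frozen
            p.requires_grad = False

    def forward(self, x, use_augment=True):
        out = self.core(x)
        if use_augment:                        # speaker-specific residual branch
            out = out + self.augment(x)
        return out


layer = CoreAugmentFFN()
x = torch.randn(4, 100, 256)
general = layer(x, use_augment=False)          # core-only path for general ASR
personal = layer(x, use_augment=True)          # core + augment for a given speaker

# Only the augment parameters would be updated during speaker-specific tuning.
tunable = [p for p in layer.parameters() if p.requires_grad]
opt = torch.optim.Adam(tunable, lr=1e-3)
```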
Although audio-visual models can deliver better performance and robustness than audio-only models, their development and adoption are hindered by the scarcity of labeled and unlabeled audio-visual data and by the cost of deploying one model per modality. In this paper, we present u-HuBERT, a self-supervised pre-training framework that can leverage both multimodal and unimodal speech through a unified masked cluster prediction objective. By applying modality dropout during pre-training, we show that a single fine-tuned model can achieve performance on par with or better than state-of-the-art modality-specific models. Moreover, our model fine-tuned only on audio performs well with audio-visual and visual speech input, achieving zero-shot modality generalization for speech recognition and speaker verification. In particular, our single model yields word error rates of 1.2%/1.4%/27.2% on LRS3 with audio-visual/audio/visual input.
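A minimal sketch of modality dropout is given below: with some probability the audio or the video stream is zeroed out before fusion, so the same encoder is trained on multimodal and unimodal inputs. The dropout probabilities and the additive fusion are assumptions for illustration, not u-HuBERT's exact configuration.

```python
# Rough sketch of modality dropout before fusion (probabilities and
# fusion-by-summation are assumptions, not the paper's settings).
import torch

def modality_dropout(audio_feat, video_feat, p_audio_only=0.25, p_video_only=0.25):
    """Randomly zero out one stream so the model also learns from unimodal input.

    audio_feat, video_feat: (B, T, D) frame-aligned features.
    """
    r = torch.rand(())
    if r < p_audio_only:                    # keep audio, drop video
        video_feat = torch.zeros_like(video_feat)
    elif r < p_audio_only + p_video_only:   # keep video, drop audio
        audio_feat = torch.zeros_like(audio_feat)
    # otherwise keep both modalities
    return audio_feat + video_feat          # simple additive fusion for the sketch


fused = modality_dropout(torch.randn(2, 50, 768), torch.randn(2, 50, 768))
# `fused` would then pass through the shared encoder and the masked cluster
# prediction loss, regardless of which modalities were present.
```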
End-to-end spoken language understanding (SLU) predicts intent directly from audio with a single model. It promises to improve the performance of assistant systems by leveraging acoustic information that is lost in intermediate textual representations and by avoiding cascading errors from automatic speech recognition (ASR). Moreover, a single unified model has efficiency advantages when deploying assistant systems. However, the limited amount of public audio data with semantic parse labels has hindered research progress in this area. In this paper, we release the Spoken Task-Oriented semantic Parsing (STOP) dataset, the largest and most complex publicly available SLU dataset. In addition, we define low-resource splits to establish a benchmark for improving SLU when limited labeled data is available. Furthermore, alongside the human-recorded audio, we release a TTS-generated version to benchmark the performance of low-resource domain adaptation for end-to-end SLU systems. Initial experiments show that end-to-end SLU models perform somewhat worse than their cascaded counterparts, which we hope will encourage future work.
This paper investigates self-supervised pre-training for audio-visual speaker representation learning, where a visual stream showing the speaker's mouth region is used as input alongside the speech. Our study focuses on the Audio-Visual Hidden Unit BERT (AV-HuBERT) approach, a recently developed general-purpose audio-visual speech pre-training framework. We conduct extensive experiments probing the effectiveness of pre-training and of the visual modality. Experimental results show that AV-HuBERT generalizes well to speaker-related downstream tasks, improving label efficiency by roughly ten-fold for both audio-only and audio-visual speaker verification. We also show that incorporating visual information, even just the lip region, substantially improves performance and noise robustness, reducing EER by 38% in clean conditions and by 75% in noisy conditions.
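The probing setup described above might be sketched as follows: features from a frozen pre-trained model are mean-pooled into an utterance-level embedding, and trial pairs are scored by cosine similarity for speaker verification. The feature extractor here is a simple stand-in for AV-HuBERT, and the pooling and scoring choices are illustrative assumptions.

```python
# Hedged sketch of probing frozen audio-visual features for speaker
# verification; the extractor is a placeholder, not AV-HuBERT itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenFeatureExtractor(nn.Module):     # placeholder for a pre-trained model
    def __init__(self, dim=768):
        super().__init__()
        self.proj = nn.Linear(80 + 512, dim) # concatenated audio + lip-ROI features
    def forward(self, audio, video):
        return self.proj(torch.cat([audio, video], dim=-1))   # (B, T, dim)

extractor = FrozenFeatureExtractor().eval()
for p in extractor.parameters():
    p.requires_grad = False                  # probe on top of frozen representations

speaker_head = nn.Linear(768, 256)           # small trainable embedding head

def utterance_embedding(audio, video):
    with torch.no_grad():
        feats = extractor(audio, video)
    return F.normalize(speaker_head(feats.mean(dim=1)), dim=-1)

# Verification score for a trial pair = cosine similarity of the two embeddings.
e1 = utterance_embedding(torch.randn(1, 200, 80), torch.randn(1, 200, 512))
e2 = utterance_embedding(torch.randn(1, 180, 80), torch.randn(1, 180, 512))
score = (e1 * e2).sum(dim=-1)                # higher score -> same speaker
```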
Direct speech-to-speech translation (S2ST) models suffer from data scarcity, as little parallel S2ST data exists compared with the amount of data available for conventional cascaded systems composed of automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS) synthesis. In this work, we explore self-supervised pre-training with unlabeled speech data and data augmentation to tackle this problem. We take advantage of the recently proposed speech-to-unit translation (S2UT) framework, which encodes target speech into discrete representations, and transfer pre-training and efficient partial fine-tuning techniques that work well for speech-to-text translation (S2T) to the S2UT domain by studying both speech encoder and discrete unit decoder pre-training. Our experiments on Spanish-English translation show that self-supervised pre-training consistently improves model performance over multitask learning, with an average gain of 6.6-12.1 BLEU, and that it can be combined with data augmentation techniques that apply MT to create weakly supervised training data. Audio samples are available at: https://facebookresearch.github.io/speech_translation/enhanced_direct_s2st_units/index.html.
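A conceptual sketch of the S2UT setup is shown below: target speech is discretized into unit IDs (here with a toy nearest-centroid quantizer over placeholder features), and an encoder-decoder model is trained with cross-entropy to predict the unit sequence from source speech. The architectures, sizes, and quantizer are assumptions for illustration; the actual system additionally uses self-supervised pre-training of the speech encoder and unit decoder and a unit-based vocoder for synthesis.

```python
# Conceptual sketch of speech-to-unit translation (S2UT); module names and
# sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_UNITS, D = 100, 256

# 1) Discretize target-language speech features into unit IDs (toy quantizer).
codebook = torch.randn(NUM_UNITS, D)                  # stands in for k-means centroids
def speech_to_units(target_feats):                    # (T, D) -> (T,) int64
    return torch.cdist(target_feats, codebook).argmin(dim=-1)

# 2) Encoder-decoder that maps source speech features to target unit logits.
class S2UTModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(80, D, batch_first=True)          # source speech encoder
        self.unit_embed = nn.Embedding(NUM_UNITS, D)
        self.decoder = nn.GRU(D, D, batch_first=True)           # discrete unit decoder
        self.out = nn.Linear(D, NUM_UNITS)

    def forward(self, src_feats, prev_units):
        _, h = self.encoder(src_feats)                          # summarize the source
        dec_out, _ = self.decoder(self.unit_embed(prev_units), h)
        return self.out(dec_out)

model = S2UTModel()
src = torch.randn(1, 300, 80)                                    # source-language audio
tgt_units = speech_to_units(torch.randn(120, D)).unsqueeze(0)    # (1, 120) unit IDs
logits = model(src, tgt_units[:, :-1])                           # teacher forcing
loss = F.cross_entropy(logits.reshape(-1, NUM_UNITS), tgt_units[:, 1:].reshape(-1))
# A unit-based vocoder (not shown) would turn predicted units back into speech.
```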
Unsupervised speech recognition has shown great potential for making automatic speech recognition (ASR) systems accessible to every language. However, existing methods still rely heavily on hand-crafted pre-processing. Following the trend toward end-to-end supervised speech recognition, we introduce wav2vec-U 2.0, which removes all audio-side pre-processing and improves accuracy through a better architecture. In addition, we introduce an auxiliary self-supervised objective that ties the model's predictions back to the input. Experiments show that wav2vec-U 2.0 improves unsupervised recognition results across different languages while being conceptually simpler.
Audio-based automatic speech recognition (ASR) degrades significantly in noisy environments and is particularly vulnerable to interfering speech, because the model cannot determine which speaker to transcribe. Audio-visual speech recognition (AVSR) systems improve robustness by complementing the audio stream with noise-invariant visual information, which helps the model focus on the desired speaker. However, previous AVSR work focused solely on the supervised learning setting, so progress was limited by the amount of labeled data available. In this work, we present a self-supervised AVSR framework built upon Audio-Visual HuBERT (AV-HuBERT), a state-of-the-art audio-visual speech representation learning model. On LRS3, the largest available AVSR benchmark dataset, our approach outperforms the prior state of the art by about 50% (28.0% vs. 14.1%) in the presence of babble noise while using less than 10% of the labeled data (433 hours vs. 30 hours), and it reduces the WER of an audio-based model by over 75% on average (25.8% vs. 5.8%).
Video recordings of speech contain correlated audio and visual information, providing a strong signal for learning speech representations from the speaker's lip movements and the produced sound. We introduce Audio-Visual Hidden Unit BERT (AV-HuBERT), a self-supervised representation learning framework for audio-visual speech that masks multi-stream video input and predicts automatically discovered and iteratively refined multimodal hidden units. AV-HuBERT learns powerful audio-visual speech representations that benefit both lip reading and automatic speech recognition. On LRS3 (433 hours), the largest public lip-reading benchmark, AV-HuBERT achieves 32.5% WER with only 30 hours of labeled data, outperforming the previous state-of-the-art approach (33.6%) trained on a thousand times more transcribed video data (31K hours). The lip-reading WER is further reduced to 26.9% when using all 433 hours of labeled data from LRS3 combined with self-training. Using our audio-visual representations on the same benchmark for audio-only speech recognition yields a 40% relative WER reduction over the state-of-the-art performance (1.3% vs. 2.3%). Our code and models are available at https://github.com/facebookresearch/av_hubert
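Masked multimodal cluster prediction can be sketched roughly as follows: the audio and lip-video streams are fused, a span of frames is masked, and the model is trained to predict a cluster ID for each masked frame. The fusion rule, masking scheme, and random placeholder cluster labels are simplifying assumptions; in AV-HuBERT the targets are automatically discovered and iteratively refined over training.

```python
# Simplified sketch of masked multimodal cluster prediction (placeholder
# cluster labels; not the AV-HuBERT training code).
import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, D, NUM_CLUSTERS = 2, 100, 768, 500

fuse = nn.Linear(2 * D, D)                         # concatenate-and-project fusion
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True), num_layers=2)
cluster_head = nn.Linear(D, NUM_CLUSTERS)
mask_embedding = nn.Parameter(torch.zeros(D))

audio = torch.randn(B, T, D)                       # frame-level audio features
video = torch.randn(B, T, D)                       # frame-level lip-ROI features
clusters = torch.randint(0, NUM_CLUSTERS, (B, T))  # placeholder pseudo-labels

x = fuse(torch.cat([audio, video], dim=-1))

# Mask a contiguous span of frames and replace them with a learned embedding.
mask = torch.zeros(B, T, dtype=torch.bool)
mask[:, 40:70] = True
x = torch.where(mask.unsqueeze(-1), mask_embedding.expand(B, T, D), x)

logits = cluster_head(encoder(x))                  # (B, T, NUM_CLUSTERS)
loss = F.cross_entropy(logits[mask], clusters[mask])   # loss only on masked frames
loss.backward()
```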